Early and mid-term outcome of patients with low-flow-low-gradient aortic stenosis treated with newer-generation transcatheter aortic valves
Patients with non-paradoxical low-flow-low-gradient (LFLG) aortic stenosis (AS) are at increased surgical risk and may therefore particularly benefit from transcatheter aortic valve replacement (TAVR). However, data on this issue are still limited and based on results with older-generation transcatheter heart valves (THVs). The aim of this study was to investigate the early and mid-term outcomes of TAVR with newer-generation THVs in the setting of LFLG AS. Data for the present analysis were gathered from the OBSERVANT II dataset, a national Italian observational, prospective, multicenter cohort study that enrolled 2,989 consecutive AS patients who underwent TAVR at 30 Italian centers between December 2016 and September 2018 using newer-generation THVs. Overall, 420 patients with LVEF <=50% and mean aortic gradient <40 mmHg were included in this analysis. The primary outcomes were 1-year all-cause mortality and a combined endpoint of all-cause mortality and hospital readmission due to congestive heart failure (CHF) at 1 year. A risk-adjusted analysis was performed to compare the outcome of LFLG AS patients treated with TAVR (n = 389) with those who underwent surgical aortic valve replacement (SAVR, n = 401) in the OBSERVANT I study. Patients with LFLG AS undergoing TAVR were elderly (mean age, 80.8 +/- 6.7 years) and at increased operative risk (mean EuroSCORE II, 11.5 +/- 10.2%). VARC-3 device success was 83.3%, with a 7.6% rate of moderate/severe paravalvular leak. Thirty-day mortality was 3.1%. One-year all-cause mortality was 17.4%, and the composite endpoint occurred in 34.8%. Chronic obstructive pulmonary disease (HR 1.78) and EuroSCORE II (HR 1.02) were independent predictors of 1-year mortality, while diabetes (HR 1.53) and NYHA class IV (HR 2.38) were independent predictors of 1-year mortality or CHF.
Compared with LFLG AS patients treated with SAVR, TAVR patients had higher rates of major vascular complications and permanent pacemaker implantation, while SAVR patients more frequently experienced blood transfusion, cardiogenic shock, acute kidney injury (AKI), and myocardial infarction (MI). However, 30-day and 1-year outcomes were similar between groups. Patients with non-paradoxical LFLG AS treated by TAVR were older and at higher surgical risk than SAVR patients. Notwithstanding, TAVR was safe and effective, with outcomes similar to SAVR at both early and mid-term follow-up.
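The risk-adjusted TAVR-vs-SAVR comparison described in this abstract can be sketched with inverse-probability-of-treatment weighting based on a propensity score. This is a minimal illustration on synthetic data: the covariates, treatment assignment, and event rate are invented assumptions, not the study's actual dataset, covariate set, or analysis code.

```python
# Hypothetical sketch of a risk-adjusted comparison between two treatment
# groups (1 = TAVR, 0 = SAVR) via propensity-score weighting.
# All data below are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 790  # roughly 389 TAVR + 401 SAVR, as in the study
age = rng.normal(80, 7, n)            # assumed covariate
euroscore = rng.gamma(2.0, 5.0, n)    # assumed covariate (EuroSCORE II, %)
treated = rng.integers(0, 2, n)       # synthetic treatment assignment

# Estimate each patient's propensity of receiving TAVR from baseline covariates
X = np.column_stack([age, euroscore])
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Inverse-probability-of-treatment weights balance the covariate distributions
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

# Weighted comparison of a binary outcome (e.g. 1-year all-cause mortality)
outcome = rng.binomial(1, 0.17, n)    # synthetic outcome, ~17% event rate
rate_tavr = np.average(outcome[treated == 1], weights=w[treated == 1])
rate_savr = np.average(outcome[treated == 0], weights=w[treated == 0])
```

The weighted event rates can then be compared between groups; in practice the study would also report confidence intervals and check covariate balance after weighting.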
Informatics for Health 2017 : advancing both science and practice
Conference report. Introduction: The Informatics for Health congress, 24-26 April 2017, in Manchester, UK, brought together the Medical Informatics Europe (MIE) conference and the Farr Institute International Conference. This special issue of the Journal of Innovation in Health Informatics contains 113 presentation abstracts and 149 poster abstracts from the congress. Discussion: The twin programmes of 'Big Data' and 'Digital Health' are not always joined up by coherent policy and investment priorities. Substantial global investment in health IT and data science has led to sound progress but highly variable outcomes. Society needs an approach that brings together the science and the practice of health informatics. The goal is multi-level Learning Health Systems that consume and intelligently act upon both patient data and organizational intervention outcomes. Conclusions: Informatics for Health demonstrated the art of the possible, seen in the breadth and depth of our contributions. We call upon policy makers, research funders and programme leaders to learn from this joined-up approach.
Presentation of laboratory test results in patient portals: influence of interface design on risk interpretation and visual search behaviour
Abstract Background Patient portals are considered valuable instruments for self-management of long-term conditions; however, there are concerns over how patients might interpret and act on the clinical information they access. We hypothesized that visual cues improve patients' abilities to correctly interpret laboratory test results presented through patient portals. We also assessed, by applying eye-tracking methods, the relationship between risk interpretation and visual search behaviour. Methods We conducted a controlled study with 20 kidney transplant patients. Participants viewed three different graphical presentations in each of low, medium, and high risk clinical scenarios composed of results for 28 laboratory tests. After viewing each clinical scenario, patients were asked how they would have acted in real life if the results were their own, as a proxy of their risk interpretation. They could choose between: 1) Calling their doctor immediately (high interpreted risk); 2) Trying to arrange an appointment within the next 4 weeks (medium interpreted risk); 3) Waiting for the next appointment in 3 months (low interpreted risk). For each presentation, we assessed accuracy of patients' risk interpretation, and employed eye tracking to assess and compare visual search behaviour. Results Misinterpretation of risk was common, with 65% of participants underestimating the need for action across all presentations at least once. Participants found it particularly difficult to interpret medium risk clinical scenarios. Participants who consistently understood when action was needed showed a higher visual search efficiency, suggesting a better strategy to cope with information overload that helped them to focus on the laboratory tests most relevant to their condition.
Conclusions This study confirms patients' difficulties in interpreting laboratory test results, with many patients underestimating the need for action, even when abnormal values were highlighted or grouped together. Our findings raise patient safety concerns and may limit the potential of patient portals to actively involve patients in their own healthcare.
Combining macula clinical signs and patient characteristics for age-related macular degeneration diagnosis: a machine learning approach
Background: To investigate machine learning methods, ranging from simpler interpretable techniques to complex (non-linear) 'black-box' approaches, for automated diagnosis of Age-related Macular Degeneration (AMD).
Methods: Data from healthy subjects and patients diagnosed with AMD or other retinal diseases were collected during routine visits via an Electronic Health Record (EHR) system. Patients' attributes included demographics and, for each eye, presence/absence of major AMD-related clinical signs (soft drusen, retinal pigment epithelium defects/pigment mottling, depigmentation area, subretinal haemorrhage, subretinal fluid, macula thickness, macular scar, subretinal fibrosis). Interpretable techniques known as white-box methods, including logistic regression and decision trees, as well as less interpretable techniques known as black-box methods, such as support vector machines (SVM), random forests and AdaBoost, were used to develop models (trained and validated on unseen data) to diagnose AMD. The gold standard was confirmed diagnosis of AMD by physicians. Sensitivity, specificity and area under the receiver operating characteristic curve (AUC) were used to assess performance.
Results: Study population included 487 patients (912 eyes). In terms of AUC, random forests, logistic regression and AdaBoost showed a mean performance of 0.92, followed by SVM and decision trees (0.90). All machine learning models identified soft drusen and age as the most discriminating variables in clinicians' decision pathways to diagnose AMD.
Conclusions: Both black-box and white-box methods performed well in identifying diagnoses of AMD and their decision pathways. Machine learning models developed through the proposed approach, relying on clinical signs identified by retinal specialists, could be embedded into EHR systems to provide physicians with real-time (interpretable) support.
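The white-box vs black-box comparison the abstract describes can be sketched with scikit-learn: train the five model families it names and compare their AUC on a held-out split. This is an illustrative sketch on synthetic data, not the study's code; the real study used age plus binary clinical signs per eye, which synthetic features here only stand in for.

```python
# Illustrative comparison of "white-box" and "black-box" classifiers by AUC,
# on synthetic data standing in for the 912-eye AMD dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 912 "eyes", 10 features (age + binary clinical signs in the study)
X, y = make_classification(n_samples=912, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression (white box)": LogisticRegression(max_iter=1000),
    "decision tree (white box)": DecisionTreeClassifier(max_depth=4),
    "random forest (black box)": RandomForestClassifier(n_estimators=200),
    "AdaBoost (black box)": AdaBoostClassifier(),
    "SVM (black box)": SVC(probability=True),
}

# Fit each model on the training split and score AUC on the unseen test split
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

Feature importances from the tree ensembles, or coefficients from the logistic regression, would then reveal the most discriminating variables, as the study reports for soft drusen and age.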
Acute kidney injury in the UK: a replication cohort study of the variation across three regional populations
Objectives
A rapid growth in the reported rates of acute kidney injury (AKI) has led to calls for greater attention and greater resources for improving care. However, the reported incidence of AKI also varies more than tenfold between previous studies. Some of this variation is likely to stem from methodological heterogeneity. This study explores the extent of cross-population variation in AKI incidence after minimising heterogeneity.
Design
Population-based cohort study analysing data from electronic health records from three regions in the UK through shared analysis code and harmonised methodology.
Setting
Three populations from Scotland, Wales and England covering three time periods: Grampian 2003, 2007 and 2012; Swansea 2007; and Salford 2012.
Participants
All residents in each region, aged 15 years or older.
Main outcome measures
Population incidence of AKI and AKI phenotype (severity, recovery, recurrence), determined using a shared biochemistry-based AKI episode code and standardised by age and sex.
Results
Crude AKI rates (per 10 000/year) were 131, 138, 139, 151 and 124 (p=0.095), and after standardisation for age and sex, 147, 151, 146, 146 and 142 (p=0.257), for Grampian 2003, 2007 and 2012; Swansea 2007; and Salford 2012, respectively. The pattern of variation in crude rates was robust to any modification of the AKI definition. Across all populations and time periods, AKI rates increased substantially with age, from ~20 per 10 000/year among those aged <40 years to ~550 per 10 000/year among those aged ≥70 years.
Conclusion
When harmonised methods are used and age and sex differences are accounted for, a similar high burden of AKI is consistently observed across different populations and time periods (~150 per 10 000/year). There are particularly high rates of AKI among older people. Policy-makers should be careful not to draw simplistic assumptions about variation in AKI rates based on comparisons that are not rigorous in methodological terms.
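The age-sex standardisation this study applies before comparing populations can be sketched as direct standardisation: weight each stratum's event rate by a common reference population. The stratum counts and reference weights below are invented for illustration, not the study's data.

```python
# Minimal sketch of direct age-sex standardisation of AKI rates.
# Strata are (age band, sex); all counts below are illustrative assumptions.

# AKI events and person-years observed in the study population, per stratum
events = {("<40", "F"): 30, ("<40", "M"): 40, ("70+", "F"): 500, ("70+", "M"): 600}
pyears = {("<40", "F"): 20000, ("<40", "M"): 21000, ("70+", "F"): 9000, ("70+", "M"): 8000}

# Common reference (standard) population used to weight every study population
ref = {("<40", "F"): 30000, ("<40", "M"): 30000, ("70+", "F"): 10000, ("70+", "M"): 9000}

# Crude rate per 10 000/year: total events over total person-years
crude_rate = 10000 * sum(events.values()) / sum(pyears.values())

# Directly standardised rate: stratum-specific rates weighted by the reference
std_rate = 10000 * sum(
    (events[s] / pyears[s]) * ref[s] for s in ref
) / sum(ref.values())
```

Because every population is weighted by the same reference, standardised rates are comparable even when the populations' age-sex structures differ, which is what makes the cross-region comparison in this study meaningful.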
Prolonged higher dose methylprednisolone vs. conventional dexamethasone in COVID-19 pneumonia: a randomised controlled trial (MEDEAS)
Dysregulated systemic inflammation is the primary driver of mortality in severe COVID-19 pneumonia. Current guidelines favor a 7-10-day course of any glucocorticoid equivalent to dexamethasone 6 mg·day⁻¹. A comparative RCT with a higher dose and a longer duration of intervention was lacking.
- …